Constrained Optimization


In-Training Multicalibrated Survival Analysis for Healthcare via Constrained Optimization

Suttaket, Thiti, Kok, Stanley

arXiv.org Artificial Intelligence

Survival analysis is an important problem in healthcare because it models the relationship between an individual's covariates and the onset time of an event of interest (e.g., death). It is important for survival models to be well-calibrated (i.e., for their predicted probabilities to be close to ground-truth probabilities) because badly calibrated systems can result in erroneous clinical decisions. Existing survival models are typically calibrated at the population level only, and thus run the risk of being poorly calibrated for one or more minority subpopulations. We propose a model called GRADUATE that achieves multicalibration by ensuring that all subpopulations are well-calibrated too. GRADUATE frames multicalibration as a constrained optimization problem, and optimizes both calibration and discrimination in-training to achieve a good balance between them. We mathematically prove that the optimization method used yields a solution that is both near-optimal and feasible with high probability. Empirical comparisons against state-of-the-art baselines on real-world clinical datasets demonstrate GRADUATE's efficacy. In a detailed analysis, we elucidate the shortcomings of the baselines vis-a-vis GRADUATE's strengths.
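The gap GRADUATE targets can be demonstrated with toy numbers (the data below are hypothetical, not from the paper): a predictor can have zero calibration error at the population level while being badly miscalibrated on every subgroup.

```python
# Hypothetical toy data: two equal-sized subgroups.
# Group A has true event rate 0.8 but predicted probability 0.6;
# group B has true event rate 0.2 but predicted probability 0.4.
groups = {
    "A": {"y": [1] * 8 + [0] * 2, "p": [0.6] * 10},
    "B": {"y": [1] * 2 + [0] * 8, "p": [0.4] * 10},
}

def calibration_gap(y, p):
    """Absolute gap between mean predicted probability and observed event rate."""
    return abs(sum(p) / len(p) - sum(y) / len(y))

all_y = [v for g in groups.values() for v in g["y"]]
all_p = [v for g in groups.values() for v in g["p"]]

print(calibration_gap(all_y, all_p))   # ~0.0: perfectly calibrated overall
for name, g in groups.items():
    print(name, calibration_gap(g["y"], g["p"]))  # ~0.2 for each subgroup
```

The per-group errors cancel in aggregate, which is exactly why a population-level calibration check can hide subgroup miscalibration.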


Cooper: A Library for Constrained Optimization in Deep Learning

Gallego-Posada, Jose, Ramirez, Juan, Hashemizadeh, Meraj, Lacoste-Julien, Simon

arXiv.org Artificial Intelligence

Cooper is an open-source package for solving constrained optimization problems involving deep learning models. Cooper implements several Lagrangian-based first-order update schemes, making it easy to combine constrained optimization algorithms with high-level features of PyTorch such as automatic differentiation, and specialized deep learning architectures and optimizers. Although Cooper is specifically designed for deep learning applications where gradients are estimated based on mini-batches, it is suitable for general non-convex continuous constrained optimization. Cooper's source code is available at https://github.com/cooper-org/cooper.
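The update pattern such Lagrangian-based schemes automate can be sketched independently of Cooper's API (the problem and step size below are illustrative, and this is not Cooper's actual interface): simultaneous first-order descent on the primal variables and projected ascent on the multipliers of the Lagrangian.

```python
# Minimize f(x) = (x - 2)^2  subject to  g(x) = x - 1 <= 0.
# Lagrangian: L(x, lam) = (x - 2)^2 + lam * (x - 1), with lam >= 0.
eta = 0.05
x, lam = 0.0, 0.0
for _ in range(2000):
    grad_x = 2.0 * (x - 2.0) + lam        # descent direction on the primal
    grad_lam = x - 1.0                    # ascent direction on the multiplier
    x -= eta * grad_x
    lam = max(0.0, lam + eta * grad_lam)  # project multiplier onto lam >= 0
print(x, lam)  # converges to the KKT point x = 1, lam = 2
```

In a deep learning setting the gradients above would come from mini-batch backpropagation, which is the case Cooper is designed for.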


Review for NeurIPS paper: A Novel Approach for Constrained Optimization in Graphical Models

Neural Information Processing Systems

Additional Feedback:

Major:
- Why was the SCIP solver used as a baseline and not Gurobi? My experience suggests that Gurobi typically performs notably better, and the academic license is free.
- It seems you unintentionally deleted a part of it.

Minor:
- I would suggest using the term "volume" instead of "cost", as it makes more sense to restrict the volume and maximize the profit than to restrict the costs and maximize the profit, especially when one speaks about a knapsack (with a predefined volume).
- It becomes more readable if you write "m is the number of log-potentials, i.e. m = |\mathbf{f}|".
- x^l = argmax ..., x^u = argmin ...
- l155: the sentence "and a real number q such that no two functions ... share any variable..." is ambiguous. Put "and a real number q" right before "we can construct" instead.


Review for NeurIPS paper: A Novel Approach for Constrained Optimization in Graphical Models

Neural Information Processing Systems

The paper proposes a new inference task for graphical models: finding a MAP assignment w.r.t. one PGM subject to a probability constraint w.r.t. a second PGM. It contains as special cases several interesting graphical-model problems, such as m-best assignments. The method solves the problem computationally via a transformation to the multiple-choice knapsack problem. Reviewers agree that the new problem is interesting and that the transformation to multiple-choice knapsack is interesting. The main criticism pertains to the small experiments, which are not necessarily indicative of real-world problems.


Reviews: UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization

Neural Information Processing Systems

The techniques seem interesting and may be useful for other analyses. I liked the use of optimism here and I think this in particular is an under-utilized type of guarantee that might have more applications. I found even the proofs in the appendices to be fairly painless to follow!



Constrained Optimization to Train Neural Networks on Critical and Under-Represented Classes

Neural Information Processing Systems

Deep neural networks (DNNs) are notorious for making more mistakes on the classes that have substantially fewer samples than the others during training. Such class imbalance is ubiquitous in clinical applications and very crucial to handle, because the classes with fewer samples most often correspond to critical cases (e.g., cancer) where misclassifications can have severe consequences. To avoid missing such cases, binary classifiers need to be operated at high True Positive Rates (TPRs) by setting a higher threshold, but this comes at the cost of very high False Positive Rates (FPRs) for problems with class imbalance. Existing methods for learning under class imbalance most often do not take this into account. We argue that prediction accuracy should be improved by emphasizing the reduction of FPRs at high TPRs for problems where misclassification of the positive, i.e. critical, class samples is associated with higher cost. To this end, we pose the training of a DNN for binary classification as a constrained optimization problem and introduce a novel constraint that can be used with existing loss functions to enforce maximal area under the ROC curve (AUC) by prioritizing FPR reduction at high TPR. We solve the resulting constrained optimization problem using an Augmented Lagrangian method (ALM). Going beyond the binary setting, we also propose two possible extensions of the proposed constraint for multi-class classification problems. We present experimental results for image-based binary and multi-class classification applications using an in-house medical imaging dataset, CIFAR10, and CIFAR100.
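The Augmented Lagrangian template the paper builds on can be sketched on a toy problem (the objective, constraint, and hyperparameters below are illustrative stand-ins, not the paper's DNN setting): an inner loop minimizes the augmented Lagrangian by gradient descent, and an outer loop updates the multiplier by the constraint residual.

```python
# Toy ALM: minimize x^2 + y^2  subject to  x + y = 1 (optimum: x = y = 0.5).
def alm(rho=10.0, lr=0.01, outer_steps=20, inner_steps=500):
    x, y, lam = 0.0, 0.0, 0.0
    for _ in range(outer_steps):
        # Inner loop: gradient descent on the augmented Lagrangian
        # L(x, y) = x^2 + y^2 + lam * c + (rho / 2) * c^2, where c = x + y - 1.
        for _ in range(inner_steps):
            c = x + y - 1.0
            x -= lr * (2.0 * x + lam + rho * c)
            y -= lr * (2.0 * y + lam + rho * c)
        lam += rho * (x + y - 1.0)  # outer multiplier update
    return x, y, lam

x, y, lam = alm()
print(x, y, lam)  # approaches x = y = 0.5 with multiplier lam = -1
```

The quadratic penalty term keeps the inner problem well-conditioned even before the multiplier has converged, which is the practical advantage of ALM over a pure penalty method.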


Self-Adaptive Ising Machines for Constrained Optimization

Delacour, Corentin

arXiv.org Artificial Intelligence

Ising machines (IM) are physics-inspired alternatives to von Neumann architectures for solving hard optimization tasks. By mapping binary variables to coupled Ising spins, IMs can naturally solve unconstrained combinatorial optimization problems such as finding maximum cuts in graphs. However, despite their importance in practical applications, constrained problems remain challenging to solve for IMs that require large quadratic energy penalties to ensure the correspondence between energy ground states and constrained optimal solutions. To relax this requirement, we propose a self-adaptive IM that iteratively shapes its energy landscape using a Lagrange relaxation of constraints and avoids prior tuning of penalties. Using a probabilistic-bit (p-bit) IM emulated in software, we benchmark our algorithm with multidimensional knapsack problems (MKP) and quadratic knapsack problems (QKP), the latter being an Ising problem with linear constraints. For QKP with 300 variables, the proposed algorithm finds better solutions than state-of-the-art IMs such as Fujitsu's Digital Annealer and requires 7,500x fewer samples. Our results show that adapting the energy landscape during the search can speed up IMs for constrained optimization.
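The adaptation loop can be illustrated on a tiny knapsack instance (the numbers below are made up, and exhaustive enumeration stands in for the p-bit sampler): the penalty weight is raised in proportion to the observed constraint violation until the energy minimum becomes feasible, instead of being fixed a priori.

```python
from itertools import product

# Toy knapsack: maximize p . x  subject to  w . x <= C, x in {0,1}^4.
p = [10, 7, 5, 3]
w = [4, 3, 2, 1]
C = 5

def energy(x, lam):
    """Ising-style energy: negated profit plus a penalty on the violation."""
    profit = sum(pi * xi for pi, xi in zip(p, x))
    violation = max(0, sum(wi * xi for wi, xi in zip(w, x)) - C)
    return -profit + lam * violation, violation

lam, eta = 0.0, 0.5
while True:
    # Exhaustive enumeration stands in for sampling the Ising machine.
    best = min(product([0, 1], repeat=4), key=lambda x: energy(x, lam)[0])
    _, violation = energy(best, lam)
    if violation == 0:
        break                  # ground state is feasible: done
    lam += eta * violation     # raise the penalty by the observed violation

print(best, lam)  # (1, 0, 0, 1): total profit 13 at weight 5
```

Starting from lam = 0 the unconstrained minimum packs every item; one multiplier update reshapes the landscape so that the feasible optimum becomes the ground state.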


A Novel Approach for Constrained Optimization in Graphical Models

Neural Information Processing Systems

We consider the following constrained maximization problem in discrete probabilistic graphical models (PGMs). Given two (possibly identical) PGMs M_1 and M_2 defined over the same set of variables and a real number q, find an assignment of values to all variables such that the probability of the assignment is maximized w.r.t. M_1 and is smaller than q w.r.t. M_2. We show that several explanation and robust estimation queries over graphical models are special cases of this problem. We propose a class of approximate algorithms for solving this problem.
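For intuition, the query can be solved by brute force on a toy instance (the two distributions and the threshold below are made up): enumerate all assignments, keep those whose probability under M_2 is below q, and return the one maximizing probability under M_1.

```python
from itertools import product

# Hypothetical joint distributions over two binary variables, standing in
# for the PGMs M_1 and M_2, plus a threshold q.
P1 = {(0, 0): 0.50, (0, 1): 0.25, (1, 0): 0.15, (1, 1): 0.10}
P2 = {(0, 0): 0.70, (0, 1): 0.10, (1, 0): 0.10, (1, 1): 0.10}
q = 0.5

# argmax_x P1(x)  subject to  P2(x) < q
feasible = [x for x in product([0, 1], repeat=2) if P2[x] < q]
best = max(feasible, key=lambda x: P1[x])
print(best, P1[best])  # (0, 1) with probability 0.25 under M_1
```

Note how the constraint rules out (0, 0), the unconstrained MAP assignment of M_1; the approximate algorithms in the paper address this search without enumerating the exponentially many assignments.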


UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization

Neural Information Processing Systems

We propose a novel adaptive, accelerated algorithm for the stochastic constrained convex optimization setting. Our method, which is inspired by the Mirror-Prox method, \emph{simultaneously} achieves the optimal rates for smooth/non-smooth problems with either deterministic or stochastic first-order oracles. This is done without any prior knowledge of either the smoothness or the noise properties of the problem. To the best of our knowledge, this is the first adaptive, unified algorithm that achieves the optimal rates in the constrained setting. We demonstrate the practical performance of our framework through extensive numerical experiments.
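The extra-gradient template underlying Mirror-Prox-style methods can be sketched on a one-dimensional problem (the step size and objective are illustrative, and this omits UniXGrad's adaptive step-size rule): each iteration takes an exploratory gradient step, then re-evaluates the gradient at that look-ahead point for the actual update, projecting back onto the feasible set.

```python
# Projected extra-gradient on: minimize f(x) = (x - 2)^2 over x in [0, 1].
# The constrained optimum is the boundary point x = 1.
def proj(x, lo=0.0, hi=1.0):
    return min(hi, max(lo, x))

def grad(x):
    return 2.0 * (x - 2.0)

x, eta = 0.0, 0.1
for _ in range(200):
    x_look = proj(x - eta * grad(x))   # exploratory (look-ahead) step
    x = proj(x - eta * grad(x_look))   # actual step uses the look-ahead gradient
print(x)  # converges to the boundary optimum x = 1.0
```

The look-ahead gradient is what gives such methods their "optimistic" flavor: the actual step anticipates where the iterate is heading rather than reacting to where it was.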